Improving Gaussian Mixture Density Estimates by Averaging
Authors
Abstract
We apply the idea of averaging ensembles of estimators to probability density estimation. In particular we use Gaussian mixture models, which are important components in many neural network applications. One variant of averaging is Breiman's "bagging", which recently produced impressive results in classification tasks. We investigate the performance of averaging using three data sets. For comparison, we employ two traditional regularization approaches, namely a maximum penalized likelihood approach and a Bayesian approach. In the maximum penalized likelihood approach we use penalty functions derived from conjugate Bayesian priors such that an EM algorithm can be used for training. In all experiments, the maximum penalized likelihood approach and averaging improved performance considerably compared to a maximum likelihood approach. In two of the experiments, the maximum penalized likelihood approach outperformed averaging. In one experiment averaging was clearly superior. Our conclusion is that maximum penalized likelihood gives good results if the penalty term in the cost function is appropriate for the particular problem. If this is not the case, averaging is superior since it shows greater robustness by not relying on any particular prior assumption. The Bayesian approach worked very well on a low-dimensional toy problem but failed to give good performance in higher-dimensional problems.
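As a concrete illustration of the averaging idea described above, the sketch below bags Gaussian mixture density estimates: each member model is fit to a bootstrap resample of the data, and the ensemble density is the arithmetic mean of the member densities. This is a minimal sketch using scikit-learn's GaussianMixture, not the authors' implementation; the function name bagged_gmm_density and parameters such as n_models are our own.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bagged_gmm_density(X, n_components=5, n_models=20, seed=0):
    """Fit one GMM per bootstrap resample of X and return a function
    that evaluates the averaged (bagged) density estimate."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        # Bootstrap: draw len(X) indices with replacement.
        idx = rng.integers(0, len(X), size=len(X))
        gmm = GaussianMixture(n_components=n_components,
                              random_state=int(rng.integers(1 << 31)))
        gmm.fit(X[idx])
        models.append(gmm)

    def density(Y):
        # Linear opinion pool: average the member densities (not log-densities).
        per_model = np.stack([np.exp(m.score_samples(Y)) for m in models])
        return per_model.mean(axis=0)

    return density

# Usage: bagged density estimate on a 2-D sample.
X = np.random.default_rng(1).normal(size=(500, 2))
p_hat = bagged_gmm_density(X)
print(p_hat(X[:5]))
```

Averaging the densities themselves, rather than their logarithms, matches the ensemble-averaging setup the abstract describes and keeps the result a properly normalized density.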
Similar Papers
Improved Gaussian Mixture Density Estimates Using Bayesian Penalty Terms and Network Averaging
Volker Tresp Siemens AG Central Research 81730 Munchen, Germany Volker. [email protected] We compare two regularization methods which can be used to improve the generalization capabilities of Gaussian mixture density estimates. The first method uses a Bayesian prior on the parameter space. We derive EM (Expectation Maximization) update rules which maximize the a posteriori parameter probabili...
Improved Gaussian Mixture Density
We compare two regularization methods which can be used to improve the generalization capabilities of Gaussian mixture density estimates. The first method consists of defining a Bayesian prior distribution on the parameter space. We derive EM (Expectation Maximization) update rules which maximize the a posteriori parameter probability, in contrast to the usual EM rules for Gaussian mixtures which ma...
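The snippet breaks off mid-sentence, but the mechanism it refers to is standard: with a conjugate prior, the EM M-step changes only by the addition of pseudo-count terms. As a hedged sketch in our own notation (the paper's exact priors and parameterization may differ), a symmetric Dirichlet prior with parameter α on the mixing weights turns the M-step update into

```latex
% r_{ij}: E-step responsibility of component j for data point x_i; N points, K components.
% MAP M-step for the mixing weights under a symmetric Dirichlet(\alpha) prior:
\hat{\pi}_j = \frac{\sum_{i=1}^{N} r_{ij} + \alpha - 1}{N + K(\alpha - 1)}, \qquad j = 1, \dots, K.
```

For α = 1 this reduces to the usual maximum likelihood update; α > 1 shrinks the weights toward the uniform distribution, which is the regularizing effect the entry refers to.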
Novel Radial Basis Function Neural Networks based on Probabilistic Evolutionary and Gaussian Mixture Model for Satellites Optimum Selection
In this study, two novel learning algorithms have been applied to a Radial Basis Function Neural Network (RBFNN) to approximate functions of high non-linear order. The Probabilistic Evolutionary (PE) and Gaussian Mixture Model (GMM) techniques are proposed to significantly minimize the error functions. The main idea concerns the various strategies to optimize the procedure of Gradient ...
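For context on what is being optimized in the entry above, the sketch below implements the plain RBF network baseline: k-means picks the Gaussian centers and least squares solves the linear output weights. This is the textbook construction, not the paper's PE/GMM training procedure; all names and defaults here are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf_network(X, y, n_centers=10, width=1.0, seed=0):
    """Textbook RBF network: k-means centers, Gaussian basis functions,
    linear output weights solved by least squares."""
    km = KMeans(n_clusters=n_centers, random_state=seed, n_init=10).fit(X)
    centers = km.cluster_centers_
    # Design matrix of Gaussian activations, one column per center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w

def predict_rbf(X, centers, w, width=1.0):
    # Evaluate the fitted network at new inputs X.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2)) @ w
```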
Spacecraft Attitude Estimation Using Adaptive Gaussian Sum Filter
This paper is concerned with improving the attitude estimation accuracy by implementing an adaptive Gaussian sum filter where the a posteriori density function is approximated by a sum of Gaussian density functions. Compared to the traditional Gaussian sum filter, this adaptive approach utilizes the Fokker-Planck-Kolmogorov residual minimization to update the weights associated with di...
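The entry's contribution is the weight adaptation via Fokker-Planck-Kolmogorov residual minimization, which we do not reproduce here. The sketch below instead shows the classical Gaussian sum measurement update that such a filter builds on: each mixture component receives a standard Kalman update, and the component weights are rescaled by the likelihood each component assigns to the measurement. Variable names and the linear-Gaussian measurement model are our assumptions.

```python
import numpy as np

def gaussian_sum_measurement_update(weights, means, covs, z, H, R):
    """Classical Gaussian sum measurement update (not the paper's adaptive
    weight scheme): Kalman-update each component, reweight by its
    measurement likelihood, then renormalize the weights."""
    new_w, new_m, new_P = [], [], []
    d = len(z)
    for w, m, P in zip(weights, means, covs):
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        v = z - H @ m                      # innovation
        new_m.append(m + K @ v)
        new_P.append((np.eye(len(m)) - K @ H) @ P)
        # Likelihood of z under this component's predicted measurement.
        lik = np.exp(-0.5 * v @ np.linalg.solve(S, v)) / \
              np.sqrt((2 * np.pi) ** d * np.linalg.det(S))
        new_w.append(w * lik)
    new_w = np.asarray(new_w)
    return new_w / new_w.sum(), new_m, new_P
```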
Speech Enhancement Using Gaussian Mixture Models, Explicit Bayesian Estimation and Wiener Filtering
Gaussian Mixture Models (GMMs) of power spectral densities of speech and noise are used with explicit Bayesian estimation in Wiener filtering of noisy speech. No assumption is made on the nature or stationarity of the noise. No voice activity detection (VAD) or any other means is employed to estimate the input SNR. The GMM mean vectors are used to form sets of over-determined systems of equatio...
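Once speech and noise power spectral densities are available (in the entry above they come from the GMM mean vectors), the filtering step itself is the classical Wiener gain. A minimal sketch with the PSD estimates taken as given; function and variable names are ours.

```python
import numpy as np

def wiener_filter_frame(noisy_spectrum, speech_psd, noise_psd, eps=1e-12):
    """Classical per-bin Wiener gain H = S_s / (S_s + S_n), applied to one
    STFT frame of the noisy signal; eps guards against division by zero."""
    gain = speech_psd / (speech_psd + noise_psd + eps)
    return gain * noisy_spectrum
```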